
Gatsby Computational Neuroscience Unit


Tom Rainforth


Wednesday 20th March 2019


Time: 4.00pm


Ground Floor Seminar Room

25 Howland Street, London, W1T 4JG


Bridging the Gap Between the Bayesian Ideal and Common Practice

The Bayesian paradigm provides a powerful and elegant framework for carrying out principled probabilistic modeling. Unfortunately, the dual challenges of encoding models true to our beliefs and solving the resulting inference problems can severely limit what is achieved in practice.

In this talk, I will discuss two approaches for bridging this gap between the Bayesian ideal and common practice: probabilistic programming and deep generative models. Probabilistic programming provides an expressive and accessible modeling framework, and then automates the computation required to draw inferences from the model. Deep generative models, on the other hand, provide a mechanism for relaxing the strict assumptions made by classical Bayesian models.
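As a toy illustration of the kind of automation probabilistic programming aims for (a minimal sketch of my own, not tied to any particular system from the talk): the model is written as ordinary code, and a generic inference routine, here simple likelihood weighting, turns observations into posterior estimates. The coin-flip model and all names below are hypothetical.

```python
import random

random.seed(0)

def coin_model():
    """A tiny generative model: draw a coin bias from a uniform prior."""
    return random.random()

def likelihood(bias, flips):
    """Probability of the observed flips (1 = heads) given the bias."""
    p = 1.0
    for f in flips:
        p *= bias if f == 1 else (1.0 - bias)
    return p

# Observed data: 8 heads in 10 flips.
flips = [1] * 8 + [0] * 2

# Likelihood weighting: sample from the prior, weight each sample by the
# likelihood of the data, and form a self-normalized estimate of the
# posterior mean of the bias. A probabilistic programming system performs
# this kind of computation automatically from the model definition alone.
samples = [coin_model() for _ in range(20000)]
weights = [likelihood(b, flips) for b in samples]
posterior_mean = sum(b * w for b, w in zip(samples, weights)) / sum(weights)

print(posterior_mean)  # close to the exact Beta(9, 3) posterior mean of 0.75
```

The same inference routine would work unchanged if the model were replaced by a different generative program, which is the sense in which inference is "automated".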

I will further discuss some recent work on the nesting of Monte Carlo estimators within one another and explain why this has important implications across a wide range of current research areas, including both probabilistic programming and deep generative models.
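To make the nesting idea concrete, here is a minimal sketch (my own toy example, not taken from the talk) of a nested Monte Carlo estimator: an inner Monte Carlo estimate is passed through a nonlinear function inside an outer Monte Carlo average. The nonlinearity is what makes naive nesting biased, with the bias shrinking as the inner sample size grows.

```python
import random

random.seed(0)

def g(x, y):
    """Integrand of the inner expectation (hypothetical example)."""
    return x + y

def nested_mc(n_outer, n_inner):
    """Nested Monte Carlo estimate of E_x[(E_y[g(x, y)])^2]
    with x, y ~ Uniform(0, 1) and f(z) = z**2 as the outer nonlinearity."""
    total = 0.0
    for _ in range(n_outer):
        x = random.random()
        # Inner Monte Carlo estimate of E_y[g(x, y)] for this x.
        inner = sum(g(x, random.random()) for _ in range(n_inner)) / n_inner
        # Applying the nonlinear f before averaging introduces an O(1/n_inner)
        # bias, since E[inner**2] = (E[inner])**2 + Var(inner).
        total += inner ** 2
    return total / n_outer

estimate = nested_mc(n_outer=2000, n_inner=100)
# True value: E_x[(x + 0.5)**2] = 13/12 ≈ 1.0833
```

Nested estimators of exactly this shape arise whenever one expectation sits inside another, which is why their convergence behavior matters across probabilistic programming and deep generative models alike.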

Biography:

I am a postdoctoral researcher in statistical machine learning at the University of Oxford. My research covers a wide range of topics including probabilistic programming, Monte Carlo methods, variational inference, Bayesian optimization, experimental design, and deep generative models. In addition to my current role as a postdoc working with Yee Whye Teh, I co-supervise a small group of my own students. Prior to this, I undertook a DPhil under the supervision of Frank Wood, culminating in the thesis "Automating inference, learning, and design using probabilistic programming." A list of my publications can be found at http://www.robots.ox.ac.uk/~twgr/publications.